
    Universal Image Steganalytic Method

    In this paper we introduce a new universal steganalytic method for the JPEG file format that detects well-known as well as newly developed steganographic methods. The steganalytic model is trained on the MHF-DZ steganographic algorithm previously designed by the same authors. A calibration technique with Feature Based Steganalysis (FBS) was employed to identify the statistical changes caused by embedding secret data into an original image. The steganalyzer concept uses Support Vector Machine (SVM) classification to train a model that the same steganalyzer later uses to distinguish between a clean (cover) image and a steganographic image. The aim of the paper was to analyze the variation in detection accuracy (ACR) when detecting test steganographic algorithms such as F5, Outguess, Model Based Steganography without deblocking, and JP Hide&Seek, which represent commonly used steganographic tools. A comparison of four feature vectors of different lengths, FBS(22), FBS(66), FBS(274), and FBS(285), shows promising results for the proposed universal steganalytic method compared to binary methods.
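    The cover-versus-stego classification step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature vectors are random stand-ins for the FBS features, and a linear SVM trained by stochastic sub-gradient descent (Pegasos-style) substitutes for whatever SVM formulation the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibrated feature vectors (stand-ins for FBS(22) features):
# cover images cluster around one mean, stego images around a shifted mean.
cover = rng.normal(0.0, 1.0, size=(200, 22))
stego = rng.normal(0.7, 1.0, size=(200, 22))

X = np.vstack([cover, stego])
y = np.hstack([-np.ones(200), np.ones(200)])    # -1 = cover, +1 = stego

def train_linear_svm(X, y, lam=0.01, epochs=50):
    """Linear SVM via stochastic sub-gradient descent on the hinge loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)               # decaying learning rate
            if y[i] * (X[i] @ w + b) < 1:       # point violates the margin
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In a real steganalyzer the two training sets would come from extracting FBS features of calibrated cover images and of images embedded with MHF-DZ.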

    Human Visual System Models in Digital Image Watermarking

    In this paper some Human Visual System (HVS) models used in digital image watermarking are presented. Four different HVS models, which exploit various properties of the human eye, are described. Two of them operate in the transform domains of the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The HVS model in the DCT domain consists of Just Noticeable Difference thresholds for the corresponding DCT basis functions, corrected by luminance sensitivity and self- or neighborhood contrast masking. The HVS model in the DWT domain is based on the different HVS sensitivity in the various DWT subbands. The third HVS model is composed of contrast thresholds as a function of spatial frequency and the eye's eccentricity. We also present a way of combining these three basic models to obtain a better tradeoff between the conflicting requirements of digital watermarks. The fourth HVS model is based on noise visibility in an image and is described by the so-called Noise Visibility Function (NVF). Possible ways of exploiting the described HVS models in digital image watermarking are also briefly discussed.
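    The DCT-domain model above (JND thresholds corrected by luminance and contrast masking) can be sketched in a Watson-style form. The baseline threshold table and the exponents here are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Hypothetical baseline JND thresholds for an 8x8 DCT block (real tables,
# e.g. Watson's, depend on viewing conditions; these values are illustrative).
t_base = 0.5 + 0.1 * (np.arange(8)[:, None] + np.arange(8)[None, :])

def jnd_thresholds(dct_block, t_base, mean_dc=1024.0, a_lum=0.649, w=0.7):
    """JND per coefficient: luminance masking, then self-contrast masking."""
    # Luminance masking: brighter-than-average blocks tolerate more change.
    t_lum = t_base * (dct_block[0, 0] / mean_dc) ** a_lum
    # Self-contrast masking: large coefficients hide larger modifications.
    t_mask = np.maximum(t_lum, np.abs(dct_block) ** w * t_lum ** (1 - w))
    return t_mask

block = np.full((8, 8), 2.0)
block[0, 0] = 2048.0           # bright block: DC twice the assumed image mean
t = jnd_thresholds(block, t_base, mean_dc=1024.0)
```

A watermarking method would then keep each coefficient modification below the corresponding entry of `t` to stay perceptually invisible.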

    Codebook Code Division Multiple Access Image Steganography

    In this paper, a new modification of spread spectrum image steganography (SSIS) is presented. The proposed modification of SSIS hides and recovers a message of substantial length within a digital image while maintaining the original image size and dynamic range. The embedded message can be in the form of text, an image, or any other digital signal. Our method is based on the CDMA SSIS technique. To increase the information capacity of the stego channel and decrease the distortion of the cover image, a new modification of CDMA using a codebook (referred to in the following as Codebook CDMA (CCDMA)) is suggested.
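    The underlying CDMA idea can be sketched as follows: each message bit modulates its own pseudo-random spreading code, the superposition is added to the cover, and the receiver recovers bits by correlation. This is a sketch of plain CDMA SSIS, not of the codebook (CCDMA) variant; the zero-mean "cover" here stands in for a high-pass residual of the image, and the embedding strength is exaggerated for clarity.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 256                       # samples per spreading code (flattened pixels)
n_bits = 8

# Hypothetical codebook of near-orthogonal +/-1 spreading sequences.
codes = rng.choice([-1.0, 1.0], size=(n_bits, N))

bits = rng.integers(0, 2, n_bits)
symbols = 2.0 * bits - 1.0    # map {0,1} -> {-1,+1}

alpha = 8.0                   # embedding strength (distortion vs robustness)
cover = rng.normal(0.0, 20.0, N)          # stand-in for a zero-mean residual
stego = cover + alpha * symbols @ codes   # superpose all modulated codes

# Blind recovery: correlate with each code; the cover and the other codes
# act as noise, while the matching code contributes alpha * N in magnitude.
corr = codes @ stego
recovered = (corr > 0).astype(int)
```

The codebook modification in the paper aims to pack more bits per unit of distortion than this one-code-per-bit scheme allows.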

    Using Hand Geometry in Biometric Security Systems

    In this paper, a biometric security system for access control based on hand geometry is presented. Biometric technologies are becoming the foundation of an extensive array of highly secure identification and personal verification solutions. Experiments show that the physical dimensions of a human hand contain information capable of verifying the identity of an individual. The database created for our system consists of 408 hand images from 24 young people of both sexes. Different pattern recognition techniques were tested for use in verification. The achieved experimental results, FAR = 0.1812% and FRR = 14.583%, show the possibility of using this system in environments with a medium security level with full acceptance from all users.
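    The FAR/FRR trade-off reported above comes from choosing a decision threshold on a matching score. A minimal sketch of how the two rates are computed, with synthetic score distributions standing in for real hand-geometry matching distances:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical matching distances: genuine attempts (same hand) score low,
# impostor attempts score high; real distributions depend on the features.
genuine = rng.normal(0.30, 0.10, 1000)
impostor = rng.normal(0.70, 0.12, 1000)

def far_frr(threshold):
    far = (impostor <= threshold).mean()   # impostors wrongly accepted
    frr = (genuine > threshold).mean()     # genuine users wrongly rejected
    return far, frr

# Sweep thresholds; a security-oriented system picks a low-FAR operating
# point and accepts the higher FRR, as in the paper's reported figures.
for thr in (0.40, 0.50, 0.60):
    far, frr = far_frr(thr)
    print(f"thr={thr:.2f}  FAR={far:.2%}  FRR={frr:.2%}")
```

Lowering the threshold trades false rejections (FRR) for fewer false acceptances (FAR), which is why the reported FAR is far smaller than the FRR.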

    Implementations of HVS Models in Digital Image Watermarking

    In this paper two possible implementations of Human Visual System (HVS) models in the digital watermarking of still images are presented. The first method performs watermark embedding in the transform domain of the Discrete Cosine Transform (DCT), and the second method is based on the Discrete Wavelet Transform (DWT). Both methods use HVS models to select perceptually significant transform coefficients and, at the same time, to determine the bounds of modification of the selected coefficients in the watermark embedding process. The HVS models in the DCT and DWT domains consist of three parts which exploit various properties of the human eye. The first part is the HVS model in the DCT (DWT) domain based on three basic properties of human vision: frequency sensitivity, luminance sensitivity, and masking effects. The second part is the HVS model based on a Region of Interest (ROI); it is composed of contrast thresholds as a function of spatial frequency and the eye's eccentricity. The third part is the HVS model based on noise visibility in an image, described by the so-called Noise Visibility Function (NVF). Watermark detection is performed without use of the original image, and the watermarks have the form of real number sequences with a normal distribution, zero mean, and unit variance. The robustness of the presented perceptual watermarking methods against various types of attacks is also briefly discussed.
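    The embedding-within-perceptual-bounds and blind correlation detection described above can be sketched as follows. The coefficient model and the per-coefficient bound are illustrative assumptions; only the watermark's form (a real, zero-mean, unit-variance normal sequence) and the blind detection come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 4096
coeffs = rng.laplace(0.0, 10.0, n)     # stand-in for mid-band DCT/DWT coeffs
jnd = 0.1 * np.abs(coeffs) + 0.5       # hypothetical per-coefficient HVS bound

# Watermark: real number sequence, normal distribution, zero mean, unit variance.
w = rng.normal(0.0, 1.0, n)

marked = coeffs + jnd * w              # modify each coefficient within its bound

def detect(c, w):
    """Blind detector: correlation of received coefficients with the watermark."""
    return (c * w).mean()

score_marked = detect(marked, w)       # large: watermark present
score_clean = detect(coeffs, w)        # near zero: no watermark
```

The detector never needs the original image: a marked image correlates strongly with the known watermark sequence, while an unmarked one does not.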

    Digital Watermarking in Wavelet Transform Domain

    This paper presents a technique for the digital watermarking of still images based on the wavelet transform. The watermark (a binary image) is embedded into the original image in its wavelet domain. The original unmarked image is required for watermark extraction. The method of embedding digital watermarks in the wavelet transform domain was analyzed and verified on grey-scale static images.
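    The scheme above can be sketched with a one-level Haar wavelet transform: embed the binary watermark in a detail subband, and extract it non-blindly by comparing against the original image's coefficients. The choice of Haar, the subband, and the strength are assumptions for illustration.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT: returns (LL, (LH, HL, HH)) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

rng = np.random.default_rng(4)
cover = rng.uniform(0, 255, (64, 64))     # stand-in for a grey-scale image
wm = rng.integers(0, 2, (32, 32))         # binary watermark image

alpha = 5.0                               # embedding strength
ll, (lh, hl, hh) = haar2d(cover)
hl_marked = hl + alpha * (2 * wm - 1)     # embed in one detail subband

# Non-blind extraction: the original (unmarked) coefficients are required,
# exactly as the paper's extraction requires the original image.
extracted = (hl_marked - hl > 0).astype(int)
```

Because the Haar transform is invertible, comparing subbands is equivalent to reconstructing the marked image and decomposing it again.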

    Nonequilibrium effects in DNA microarrays: a multiplatform study

    It has recently been shown that in some DNA microarrays the time needed to reach thermal equilibrium may largely exceed the typical experimental time, which is about 15 h in standard protocols (Hooyberghs et al., Phys. Rev. E 81, 012901 (2010)). In this paper we discuss how this breakdown of thermodynamic equilibrium could be detected in microarray experiments without resorting to real-time hybridization data, which are difficult to obtain under standard experimental conditions. The method is based on the analysis of the distribution of fluorescence intensities I from different spots for probes carrying base mismatches. In thermal equilibrium and at sufficiently low concentrations, log I is expected to be linearly related to the hybridization free energy ΔG with a slope equal to 1/RT_exp, where T_exp is the experimental temperature and R is the gas constant. The breakdown of equilibrium results in deviations from this law. A model for hybridization kinetics explaining the observed experimental behavior, the so-called 3-state model, is discussed. It predicts that deviations from equilibrium yield a proportionality of log I to ΔG/RT_eff, where T_eff is an effective temperature higher than the experimental one. This behavior is indeed observed in some experiments on Agilent arrays. We analyze experimental data from two other microarray platforms and discuss, on the basis of the results, the attainment of equilibrium in these cases. Interestingly, the same 3-state model predicts a (dynamical) saturation of the signal at values below the one expected at equilibrium.
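    The diagnostic described above amounts to fitting the slope of log I versus ΔG and reading off an effective temperature. A minimal sketch with synthetic data (all numerical values, the noise level, and the sign convention for ΔG are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(5)

R = 8.314e-3                 # gas constant in kJ/(mol K)
T_exp = 338.0                # experimental temperature, K (about 65 C)
T_eff = 700.0                # assumed effective temperature for illustration

# Hypothetical hybridization free energies for a set of mismatched probes
# (binding convention: larger dG = stronger binding = brighter spot).
dG = rng.uniform(30.0, 60.0, 200)         # kJ/mol

# Out-of-equilibrium model: log I scales as dG / (R * T_eff), plus spot noise.
log_I = 2.0 + dG / (R * T_eff) + rng.normal(0.0, 0.3, 200)

slope, intercept = np.polyfit(dG, log_I, 1)
T_fitted = 1.0 / (R * slope)
print(f"fitted effective temperature: {T_fitted:.0f} K (T_exp = {T_exp:.0f} K)")
```

Recovering a fitted temperature well above T_exp, as here, would signal that the spots have not reached thermodynamic equilibrium; a fit consistent with T_exp would support equilibrium.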

    Physico-chemical foundations underpinning microarray and next-generation sequencing experiments

    Hybridization of nucleic acids on solid surfaces is a key process involved in high-throughput technologies such as microarrays and, in some cases, next-generation sequencing (NGS). A physical understanding of the hybridization process helps to determine the accuracy of these technologies. The goal of a widespread research program is to develop reliable transformations between the raw signals reported by the technologies and the individual molecular concentrations in an ensemble of nucleic acids. This research has inputs from many areas, from bioinformatics and biostatistics, to theoretical and experimental biochemistry and biophysics, to computer simulations. A group of leading researchers met in Ploen, Germany, in 2011 to discuss present knowledge and the limitations of our physico-chemical understanding of high-throughput nucleic acid technologies. That meeting inspired us to write this summary, which provides an overview of state-of-the-art, physico-chemically founded approaches to modeling the hybridization of nucleic acids on solid surfaces. In addition, the practical application of current knowledge is emphasized.
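    A standard starting point for the physico-chemical modeling surveyed here is the Langmuir isotherm for surface hybridization, which links signal to target concentration and binding free energy. The numbers below are purely illustrative:

```python
import math

R = 8.314e-3      # gas constant, kJ/(mol K)
T = 338.0         # hybridization temperature, K (illustrative)

def equilibrium_coverage(c, dG):
    """Langmuir isotherm: fraction of surface probes hybridized at target
    concentration c (mol/L), given hybridization free energy dG (kJ/mol,
    negative for favorable binding)."""
    K = math.exp(-dG / (R * T))          # equilibrium binding constant
    return c * K / (1.0 + c * K)

# A single-base mismatch that weakens binding by ~5 kJ/mol noticeably
# lowers coverage in the unsaturated regime (illustrative numbers).
pm = equilibrium_coverage(1e-9, -50.0)   # perfect-match probe
mm = equilibrium_coverage(1e-9, -45.0)   # mismatched probe
```

In the low-concentration (unsaturated) regime the coverage, and hence the signal, is approximately proportional to c·exp(-ΔG/RT), which is the basis for converting raw intensities into concentration estimates.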

    Arrested spinodal decomposition in polymer brush collapsing in poor solvent

    We study the Brownian dynamics of flexible and semiflexible polymer chains densely grafted on a flat substrate, upon rapid quenching of the system when the quality of the solvent becomes poor and the chains attempt to collapse into a globular state. The collapse process of such a polymer brush differs from that of individual chains, both in its kinetics and in its structural morphology. We find that the resulting collapsed brush does not form a homogeneous dense layer, in spite of all chain monomers equally attracting each other via a model Lennard-Jones potential. Instead, a very distinct inhomogeneous density distribution forms in the plane, with a characteristic length scale dependent on the quenching depth (or, equivalently, the strength of monomer attraction) and the geometric parameters of the brush. This structure is identical to a spinodal-decomposition structure; however, due to the grafting constraint we find no subsequent coarsening: the established random bundling with its characteristic periodicity remains as the apparently equilibrium structure. We compare this finding with a recent field-theoretical model of bundling in a semiflexible polymer brush.

    This work was funded by the Osk. Huttunen Foundation (Finland) and the Cambridge Theory of Condensed Matter Grant from EPSRC. Simulations were performed using the Darwin supercomputer of the University of Cambridge High Performance Computing Service provided by Dell Inc. using Strategic Research Infrastructure funding from the Higher Education Funding Council for England. This is the accepted manuscript; the final version is available at http://pubs.acs.org/doi/abs/10.1021/ma501985r
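    The monomer attraction that drives the collapse is the Lennard-Jones potential mentioned above, with the well depth playing the role of the quench depth. A minimal sketch (the parameter values are illustrative, not the simulation's):

```python
import numpy as np

def lennard_jones(r, eps=1.0, sigma=1.0):
    """Monomer-monomer LJ potential; eps sets the quench depth (solvent
    quality): larger eps means stronger attraction, i.e. a poorer solvent."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

r = np.linspace(0.9, 3.0, 500)
shallow = lennard_jones(r, eps=0.3)   # near-theta solvent: weak attraction
deep = lennard_jones(r, eps=1.5)      # deep quench: poor solvent

r_min = r[np.argmin(deep)]            # attraction minimum near 2**(1/6)*sigma
```

Deepening the quench (raising eps) only deepens the attractive well; the abstract's point is that, for a grafted brush, this drives spinodal-like in-plane bundling whose length scale depends on eps and the brush geometry.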